Online Studies
More and more experiments, surveys, and qualitative studies are being conducted online. Recognizing the significance of this shift, the BRL hopes to help researchers explore and leverage the various online research tools and services available today. While we encourage researchers at MIT to conduct lab and online studies through the BRL, we understand that other avenues of data collection may be more suitable for certain projects. This webpage introduces two of these avenues: research panels and crowdsourcing platforms.
Please contact the BRL Coordinator at lab-manager@mit.edu if you have any questions or requests regarding online data collection. In particular, please reach out to the BRL Coordinator if you need the following:
- Guidance on choosing the right panel vendor or crowdsourcing platform for your project
- Assistance in contacting a panel vendor to obtain information or to launch a study
- Technical support in using a particular crowdsourcing platform
- General advice on conducting online studies
Research Panels
There are numerous research panels in the U.S. today, with sizes ranging from a few hundred to several million members. Panels consist of individuals who have expressed willingness to participate in research activities conducted by or through a particular company or organization. Newly registered panelists provide detailed demographic, psychographic, and behavioral information by completing a “profiling survey”, which lets researchers limit the target audience of their studies to individuals with certain characteristics.
Typically, when a panel vendor receives a new research request, a project manager sets up an initial meeting with the researcher to discuss the details of the proposed study. The project manager then provides a quote of how much the research project is expected to cost. Once the researcher accepts the quote, the project manager begins recruiting participants and continues to work with the researcher as the study progresses.
The cost of conducting a study through a panel vendor is mainly determined by the following factors:
- Desired sample size — The number of panelists the researcher hopes to have in the final sample.
- Eligibility criteria — The requirements panelists must meet to qualify for the study. Stricter requirements usually result in higher costs.
- Study duration — The amount of time each panelist is likely to spend on the research activity.
- Mode of data collection — The data collection method(s) the researcher plans to use. It is generally cheaper to conduct studies online than by phone, by mail, or in person.
- Add-on services — Many panel vendors not only assist researchers with data collection, but also provide a range of other services, such as research design consultation, survey programming, and data analysis. Researchers can use these services by paying additional fees.
The BRL has established relationships with three popular panel vendors: Dynata, NORC, and Qualtrics. Details about each of these vendors can be found below.
We especially encourage researchers to conduct studies through Dynata, our preferred panel vendor. As a result of our partnership with the company, MIT researchers can now enjoy a 15% discount when using the Dynata panel for online studies. Please contact us if you would like to take advantage of this special offer or learn more about Dynata’s pricing.
Research Panel Vendors
Dynata
Dynata is one of the world’s leading providers of first-party data contributed by consumers and business professionals. With a reach that encompasses 60+ million people globally and an extensive library of individual profile attributes collected through surveys, Dynata is the cornerstone for precise, trustworthy quality data. The company has built innovative data services and solutions around its core first-party data offering to bring the voice of the customer to the entire marketing spectrum, from market research to marketing and advertising. Dynata serves nearly 6,000 market research agencies, media and advertising agencies, consulting and investment firms, and healthcare and corporate customers in North America, South America, Europe, and Asia-Pacific.
Dynata also supports a variety of telephone-based data collection methods, such as random-digit dialing (RDD), mobile-only RDD, directory sampling, demographic targeting, and computer-assisted telephone interviewing (CATI). Other research services offered by Dynata include:
- Mobile surveys, mail surveys, and in-person interviews
- Survey design, hosting, and programming
- Data processing, analysis, and reporting
- Self-service online samples
Dynata is committed to delivering high-quality data to researchers. It has adopted a series of quality control measures designed to verify panelist identity, flag duplicate accounts, and detect satisficing behavior (e.g., speeding, straight-lining) in web surveys.
NORC
NORC, a research institution based at the University of Chicago, manages a probability-based panel called AmeriSpeak, which consists of about 30,000 households across the United States. AmeriSpeak panel members typically participate in web-based or phone-based studies two to three times a month.
NORC supports various types of research activities, including general population surveys, surveys targeting low-incidence groups, longitudinal studies, area studies, studies involving physiological measures, and qualitative studies (e.g., focus groups, interviews). If needed, NORC can combine the AmeriSpeak panel with other probability-based or non-probability panels to obtain larger sample sizes. In addition, NORC sends out an omnibus survey every month to 1,000 adults drawn from the AmeriSpeak panel. Each survey contains questions submitted by different researchers who share the cost of using the omnibus service.
Aside from research data and participant demographic information, NORC also provides clients with sample quality statistics such as response and completion rates, margins of error, bias measurements, and representativeness calculations. Moreover, NORC can help researchers process data, perform statistical analyses, create summary reports, and generate other optional deliverables.
Qualtrics
Researchers who use Qualtrics to create surveys can collect data directly through the company, which has access to millions of online panelists worldwide. Qualtrics is a panel aggregator, meaning that it does not have a panel of its own, but instead draws samples from third-party panel vendors.
Qualtrics provides a comprehensive set of research services to its clients, ranging from survey design and programming to data collection and processing to statistical analysis and reporting. Researchers can request any combination of these services depending on their needs. Modes of data collection supported by Qualtrics include standard and omnibus online surveys, interactive voice response (IVR) phone surveys, CATI, online and in-person focus groups, and online interviews.
Please note that the Qualtrics licenses held by MIT and Sloan only allow affiliated individuals to build surveys for free. The licenses do not cover any of the research services mentioned above.
Besides Dynata, NORC, and Qualtrics, the following panel vendors (listed in alphabetical order) may also be worth exploring:
Crowdsourcing Platforms
Crowdsourcing is the recruitment of a large, often unassociated group of individuals to collectively undertake a project, typically over the Internet. These individuals may be asked to generate ideas, process data, create content, solve complex problems, or, as discussed below, participate in online research studies. Much like how companies use online job boards to advertise employment opportunities, one can visit certain websites to recruit Internet users for different types of tasks. These websites, known as crowdsourcing platforms, give researchers easy, on-demand access to prospective participants from all around the world. This section provides an overview of four crowdsourcing platforms: Amazon Mechanical Turk (MTurk), CloudResearch (formerly TurkPrime), psiTurk, and Prolific.
Crowdsourcing Vendors
Amazon Mechanical Turk (MTurk)
Mechanical Turk (MTurk) is a widely used crowdsourcing platform developed by Amazon in 2005. The website features a large number of “human intelligence tasks” (HITs), which are usually tasks that humans can perform more accurately or efficiently than computers. Common types of HITs include categorizing images, transcribing media files, extracting information from websites, and recording the contents of receipts.
A “requester” is an individual, business, or organization that creates HITs on MTurk. Many companies now use MTurk regularly to outsource their services, collect data for machine learning, solicit opinions from potential customers, and so on. In recent years, MTurk has also become increasingly popular among researchers in the social sciences.
People from all around the world can sign up to be MTurk “workers” (or “Turkers”) and complete HITs for monetary compensation. However, not everyone who signs up is granted access to MTurk, as all worker accounts must first be approved by Amazon. Over time, a small portion of users become “master workers” — that is, workers who have “consistently demonstrated a high degree of success in performing a wide range of HITs across a large number of requesters”. Amazon has not disclosed the specific criteria it uses for determining who can receive the master worker designation, or even who can join the workforce in the first place. When setting up HITs on MTurk, a requester can target a subset of the worker population based on location, performance history, master worker status, and other “qualifications” provided by the system or previously assigned by the requester.
Each HIT on MTurk represents an individual task that a human can typically complete rather quickly (e.g., labeling an image, finding the address of a restaurant). However, some HITs, such as transcribing an audio track or answering a survey, may take longer to complete. In most cases, requesters create a “project” (also known as a “HIT group”, a “HIT type”, or a “batch”) that contains many HITs of the same kind. Furthermore, a single HIT can have multiple “assignments”, which allows more than one worker to complete the same HIT. This is useful to requesters seeking consensus or a range of viewpoints on certain subjects.
Requesters have the option to post HITs and manage workers through MTurk’s graphical user interface (GUI), command line interface (CLI), or application programming interface (API). The GUI, which can be accessed by logging into the MTurk requester site, enables users to build HITs visually from a set of HTML templates. If a requester hopes to conduct a study hosted on an external website (e.g., Qualtrics), they can simply create a HIT in the GUI with a link to the study. By contrast, the CLI requires requesters to interact with MTurk programmatically. Although the GUI is easier to use, the CLI offers more features, including the ability to add “qualification tests” that workers must pass before they can complete certain HITs. Finally, requesters can use the API to perform all the operations available in the GUI and the CLI, as well as some additional functions, such as emailing workers. One of the main advantages of the API is that it can be integrated into business applications and back-end systems, which allows companies to automatically generate HITs, process data, pay workers, etc.
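As a concrete illustration of the API route, below is a minimal Python sketch using the boto3 library. It builds the ExternalQuestion XML payload for a HIT that links to an externally hosted study, and shows how such a HIT could be posted to the MTurk sandbox. The study URL, title, reward, and timing values are placeholders, and the sandbox call requires valid AWS requester credentials.

```python
# Hedged sketch: posting an externally hosted study as a HIT via the MTurk API.
# All HIT properties below are illustrative placeholders, not recommendations.

def external_question(study_url: str, frame_height: int = 600) -> str:
    """Return the ExternalQuestion XML accepted by CreateHIT's Question parameter."""
    schema = ("http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/"
              "2006-07-14/ExternalQuestion.xsd")
    return (f'<ExternalQuestion xmlns="{schema}">'
            f'<ExternalURL>{study_url}</ExternalURL>'
            f'<FrameHeight>{frame_height}</FrameHeight>'
            f'</ExternalQuestion>')

def post_sandbox_hit(study_url: str):
    """Post a single-assignment HIT to the MTurk sandbox (needs AWS credentials)."""
    import boto3  # deferred so the XML helper above works without boto3 installed
    client = boto3.client(
        "mturk",
        endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
    )
    return client.create_hit(
        Title="Short research survey",
        Description="Complete a 10-minute survey hosted on an external site",
        Reward="1.00",                      # passed as a string, in USD
        MaxAssignments=1,
        LifetimeInSeconds=3600,             # HIT visible for one hour
        AssignmentDurationInSeconds=1800,   # workers get 30 minutes to finish
        Question=external_question(study_url),
    )
```

In practice, the same ExternalQuestion payload is what the GUI generates behind the scenes when a requester pastes a study link into a HIT template.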
Amazon gives requesters the freedom to decide how much workers can earn for each HIT and to deny payment for incomplete or unsatisfactory work. However, a requester who frequently underpays workers or rejects HITs will likely acquire a poor reputation on popular MTurk forums such as Turkopticon and Turker Nation, which may in turn lead to lower HIT completion rates. After receiving payment on MTurk, workers can transfer their earnings to a bank account or to an Amazon.com gift card. Aside from worker remuneration, Amazon charges requesters a 20% service fee, plus an additional 20% fee for HITs with 10 or more assignments. Requesters must also pay extra fees for recruiting master workers and for posting HITs with certain qualifications.
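Based on the fee schedule described above, a rough per-HIT cost estimate can be sketched as follows (the 20% rates are as stated here; consult MTurk's pricing page for the authoritative numbers):

```python
def mturk_requester_cost(reward_per_assignment: float, num_assignments: int) -> float:
    """Estimate the total requester cost for one HIT: worker pay plus Amazon's
    20% service fee, with an extra 20% for HITs with 10 or more assignments."""
    fee_rate = 0.20
    if num_assignments >= 10:
        fee_rate += 0.20  # additional fee kicks in at the 10-assignment threshold
    worker_pay = reward_per_assignment * num_assignments
    return round(worker_pay * (1 + fee_rate), 2)

# A $1.00 survey costs noticeably more per response once it crosses the threshold:
mturk_requester_cost(1.00, 9)   # 9 x $1.00 x 1.20 = $10.80
mturk_requester_cost(1.00, 10)  # 10 x $1.00 x 1.40 = $14.00
```

The jump at 10 assignments is why the batching features discussed in the CloudResearch section below can meaningfully reduce study costs.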
Please be sure to explore the knowledge base at the bottom of this webpage for MTurk-related tutorials, support documentation, and articles. In addition, we encourage you to utilize the MTurk sandbox, which is a simulated environment where requesters can create HITs without spending any real money and preview the HITs from the perspective of a worker.
CloudResearch
CloudResearch is a web-based application that enables researchers to collect data on MTurk with greater efficiency and flexibility. Through the CloudResearch web interface, researchers can perform many advanced MTurk functions that would otherwise require API programming (see the “MTurk” tab for details). To launch a study using CloudResearch, a researcher must first build a survey or experiment on an external website, such as Qualtrics. The researcher can then set up a HIT on CloudResearch by providing the study URL and by configuring various settings. Once the HIT is submitted, it becomes visible to workers on MTurk. While data collection is in progress, the researcher has the ability to monitor and manage the HIT through the CloudResearch web interface.
By creating a CloudResearch account with an academic email address, researchers can use the following features for free:
- Include or exclude workers from previous studies
- Target workers with certain demographic, psychographic, or behavioral characteristics
- Edit the properties of a HIT after it is posted
- Refresh an active HIT in order to bump it up the list of available HITs
- Approve HITs and pay workers automatically
- Send emails or grant bonuses to a group of workers
- View HIT statistics such as dropout rates and completion times
CloudResearch also offers a few paid features, including “microbatch” and “hyperbatch”, which can be especially useful to researchers who need to recruit many workers for a single HIT (e.g., a survey). The microbatch feature automatically (1) creates multiple copies of a HIT, (2) assigns a small number of workers to each copy, (3) publishes HIT copies at specified time intervals, and (4) prevents workers from completing the HIT more than once. The main purpose of implementing microbatch is to reduce potential sampling bias, as workers who complete a HIT on a particular day or during particular hours may differ significantly from those who do so at other times. Hyperbatch works in a similar manner, except that it publishes HIT copies all at once rather than separately. While this may result in a less representative sample, it increases the visibility of the HIT on MTurk, which can lead to higher participation rates and shorter data collection times. Moreover, both microbatch and hyperbatch allow researchers to assign fewer than 10 workers to each HIT, thereby eliminating the 20% fee that MTurk requesters must pay if their HITs have 10 or more assignments.
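The batching arithmetic behind these features is straightforward. Here is a hedged sketch of how a target sample might be split into sub-10-assignment HIT copies; CloudResearch additionally handles publication scheduling and duplicate-worker exclusion, which this sketch omits.

```python
def split_into_batches(total_assignments: int, per_copy: int = 9) -> list[int]:
    """Split a target sample into HIT copies of at most `per_copy` assignments,
    keeping every copy below the 10-assignment threshold that triggers the
    extra 20% MTurk fee."""
    if not 1 <= per_copy <= 9:
        raise ValueError("each copy needs 1-9 assignments to avoid the extra fee")
    full, remainder = divmod(total_assignments, per_copy)
    return [per_copy] * full + ([remainder] if remainder else [])

split_into_batches(100)  # eleven copies of 9 assignments plus one copy of 1
```

Under microbatch these copies would be published at intervals; under hyperbatch they would all go live at once.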
Besides providing the core services mentioned above, CloudResearch also has a project management team that can help researchers conduct fully managed MTurk studies, collect data through external online panels, and create other custom research solutions.
psiTurk
psiTurk is a free, open-source platform designed to facilitate the administration of online experiments on MTurk. Unlike CloudResearch, psiTurk does not have a web interface where researchers can deploy and manage HITs. Instead, it requires users to communicate with MTurk through a command line interface (CLI), which is distinct from the MTurk CLI provided by Amazon. The psiTurk CLI only runs on UNIX systems (e.g., macOS, Linux) and therefore does not run on Windows computers, though there are several ways a researcher can work around this limitation. psiTurk not only offers many of the basic features found in other MTurk-based tools, but also has the ability to perform some special operations, such as randomly assigning workers to experimental conditions, excluding users of certain web browsers or devices (e.g., phone, tablet), and keeping track of when a worker switches to a different browser window or tab during an experiment.
psiTurk is best suited for individuals with some web programming experience, as researchers must build their own online experiments using HTML, CSS, and JavaScript before posting those experiments on MTurk. The psiTurk website contains thorough documentation on the key components and functionality of the platform, as well as an “Experiment Exchange” section where researchers can share and download psiTurk-compatible experiment files.
Prolific
Launched in 2014 by a group of graduate students from the University of Sheffield and the University of Oxford in the U.K., Prolific (or Prolific Academic) is a crowdsourcing platform dedicated solely to academic research. It serves as an alternative to MTurk, which, despite its growing popularity in the academic community, was built primarily to meet the needs of businesses, especially those that use the platform to collect large amounts of data as part of their day-to-day operations.
Prolific offers a simple web interface where researchers can post the URLs of externally hosted surveys and experiments. Researchers have the option to narrow down their target population by filtering out participants from previous studies, by excluding or including specific participants, by targeting participants with certain personal attributes, or by creating custom prescreen questions.
In contrast to MTurk, which allows researchers to freely decide how much they would like to pay participants, Prolific enforces a minimum compensation amount of $6.50/hour. This amount is prorated based on the length of a given study, which is initially estimated by the researcher but later replaced by the average study completion time of real participants. Furthermore, if a researcher unreasonably denies payment to a participant, the participant can make an appeal by contacting Prolific staff members, who may then overturn the decision.
In addition to participant compensation, researchers must pay a 30% service fee when conducting studies through Prolific.
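Putting the minimum rate and the service fee together, a per-participant payment and total study cost can be estimated roughly as follows (the $6.50/hour floor and 30% fee are as stated above; Prolific's current rates may differ):

```python
PROLIFIC_MIN_RATE = 6.50  # minimum hourly compensation in USD, as described above
SERVICE_FEE = 0.30        # service fee charged on top of participant payments

def prolific_study_cost(minutes: float, n_participants: int,
                        hourly_pay: float = PROLIFIC_MIN_RATE) -> tuple[float, float]:
    """Return (per-participant payment, total cost including the service fee),
    prorating the hourly rate by the study's estimated length."""
    if hourly_pay < PROLIFIC_MIN_RATE:
        raise ValueError("Prolific enforces a minimum of $6.50/hour")
    pay = round(hourly_pay * minutes / 60, 2)
    total = round(pay * n_participants * (1 + SERVICE_FEE), 2)
    return pay, total

prolific_study_cost(12, 100)  # 12-minute study: $1.30 per participant, $169.00 total
```

Because Prolific replaces the researcher's length estimate with the observed average completion time, the per-participant figure may be adjusted after data collection begins.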